Search Results: "gfa"

31 October 2016

Antoine Beaupré: My free software activities, October 2016

Debian Long Term Support (LTS) This is my 7th month working on Debian LTS, started by Raphael Hertzog at Freexian, after a long pause during the summer. I have worked on the following packages and CVEs: I have also helped review work on the following packages:
  • imagemagick: reviewed BenH's work to figure out what was done. Unfortunately, I forgot to officially take on the package and Roberto started working on it in the meantime. I nevertheless took time to review Roberto's work and outline possible issues with the originally suggested patchset
  • tiff: reviewed Raphael's work on the hairy TIFFTAG_* issues, all the gory details in this email
The work on ImageMagick and GraphicsMagick was particularly intriguing. Looking at the source of those programs makes me wonder why we are still using them at all: it's a tangled mess of C code that is bound to bring up more and more vulnerabilities, time after time. It seems there's always a "Magick" vulnerability waiting to be fixed out there... I somehow hoped that the fork would bring more stability and reliability, but it seems they are suffering from similar issues because, fundamentally, they haven't rewritten ImageMagick... It looks like this is something that affects all image programs. The review I have done on the tiff suite gives me the same shivering sensation as reviewing the "Magick" code. It feels like all image libraries are poorly implemented and then bound to be exploited somehow... Nevertheless, if I had to use a library of the sort in my software, I would stay away from the "Magick" forks and try something like imlib2 first... Finally, I also did some minor work on the user and developer LTS documentation and some triage work on samba, xen and libass. I also looked at the dreaded CVE-2016-7117 vulnerability in the Linux kernel to verify its impact on wheezy users. I also looked at implementing a --lts flag for dch (see bug #762715). It was difficult to get back to work after such a long pause, but I am happy I was able to contribute a significant number of hours. It's a bit difficult to find work sometimes in LTS-land, even if there's actually always a lot of work to be done. For example, I used to be one of the people doing frontdesk work, but those duties are now assigned until the end of the year, so it's unlikely I will be doing any of that for the foreseeable future. Similarly, a lot of packages were assigned when I started looking at the available packages. There was an interesting discussion on the internal mailing list regarding unlocking package ownership, because some people had packages locked for weeks, sometimes months, without significant activity. Hopefully that situation will improve after that discussion. Another interesting discussion I participated in is the question of whether the LTS team should be waiting for unstable to be fixed before publishing fixes in oldstable. It seems the consensus right now is that it shouldn't be mandatory to fix issues in unstable before we fix security issues in oldstable and stable. After all, security support for testing and unstable is limited. But I was happy to learn that working on brand new patches is part of our mandate as part of the LTS work. I did work on such a patch for tar which ended up being adopted by the original reporter, although upstream ended up implementing our recommendation in a better way. It's coincidentally the first time since I started working on LTS that I didn't get the number of requested hours, which means that there are more people working on LTS. That is a good thing, but I am worried it may also mean people are more spread out and less capable of focusing for longer periods of time on more difficult problems. It also means that the team is growing faster than the funding, which is unfortunate: now is as good a time as any to remind you to see if you can make your company fund the LTS project if you are still running Debian wheezy.

Other free software work It seems like forever since I wrote such a report, and a lot has happened since the last one, including while I was on vacation.

Monkeysign I have done extensive work on Monkeysign, trying to bring it kicking and screaming into the new world of GnuPG 2.1. This was the objective of the 2.1 release, which collected about two years of work and patches, including arbitrary MUA support (e.g. Thunderbird), config files support, and a release on PyPI. I have had to make about four more releases to try and fix the build chain, ship the test suite with the program and have a primitive preferences panel. The 2.2 release also finally features Tor support! I am also happy to have moved more documentation to Read the docs, part of which I mentioned in a previous article. The git repositories and issues were also moved to a Gitlab instance which will hopefully improve the collaboration workflow, although we still have issues in streamlining the merge request workflow. All in all, I am happy to be working on Monkeysign, but it has been a frustrating experience. In the last few years, I have been maintaining the project largely on my own: although there are about 20 contributors in Monkeysign, I have committed over 90% of the commits in the code. New contributors recently showed up, and I hope this will relieve some of the pressure on me as the sole maintainer, but I am not sure how viable the project is.

Funding free software work More and more, I wonder how to sustain my contributions to free software. As a previous article has shown, I work a lot on the computer, even when I am not on a full-time job. Monkeysign has been a significant time drain in the last months, and I have done this work on a completely volunteer basis. I wouldn't mind so much except that there is a lot of work I do on a volunteer basis. This means that I sometimes must prioritize paid consulting work, at the expense of those volunteer projects. While most of my paid work usually revolves around free software, the benefits of paid work are not always immediately obvious, as the primary objective is to deliver to the customer, and the community as a whole is somewhat of a side-effect. I have watched with interest joeyh's adventures into crowdfunding, which seems to be working pretty well for him. Unfortunately, I cannot claim the incredible (and well-deserved) reputation Joey has, and even if I could, I can't live on $500 a month. I would love to hear if people would be interested in funding my work in such a way. I am hesitant to launch a crowdfunding campaign because it is difficult to identify what exactly I am working on from one month to the next. Looking back at earlier reports shows that I am all over the place: one month I'll work on a Perl Wiki (Ikiwiki), the next one I'll be hacking at a multimedia home cinema (Kodi). I can hardly think of how to fund those things short of "just give me money to work on anything I feel like", which I can hardly ask of anyone. Even worse, it feels like the audience here is either friends or colleagues. It would make little sense for me to seek funding from those people: colleagues have the same funding problems I do, and I don't want to impoverish my friends... So far I have taken the approach of trying to get funding for work I am doing, bit by bit. For example, I have recently been told that LWN actually pays for contributed articles and have started running articles by them before publishing them here. This is looking good: they will publish an article I wrote about the Omnia router I have recently received. I give them exclusive rights on the article for two weeks, but I otherwise retain full ownership over the article and will publish it here after the exclusive period. Hopefully, I will be able to find more such projects that pay for the work I do on a day-to-day basis.

Open Street Map editing I have ramped up my OpenStreetMap contributions, having (temporarily) moved to a different location. There are lots of things to map here: trails, gas stations and lots of other things are missing from the map. Sometimes the effort looks a bit ridiculous, reminding me of my early days of editing OSM. I have registered with OSM Live, a project to fund OSM editors that, I must admit, doesn't help much with funding my work: with the hundreds of edits I did in October, I received the equivalent of CAD$1.80 in Bitcoin. This may be the lowest hourly salary I have ever received, probably going at a rate of 10 per hour! Still, it's interesting to be able to point people to the project if someone wants to contribute to OSM mappers. But mappers should have no illusions about getting a decent salary from this effort, I am sorry to say.

Bounties I feel this is similar to the "bounty" model used by the Borg project: I claimed around $80USD in that project for what probably amounts to tens of hours of work, yet another salary that would qualify as "poor". Another example is a feature I would like to implement in Borg: support for protocols other than SSH. There is currently no bounty on this, but a similar feature, S3 support, has one of the largest bounties Borg has ever seen: $225USD. And the claimant for the bounty hasn't actually implemented the feature: instead of backing up to S3, the patch (to a third-party tool) actually enables support for Amazon Cloud Drive, a completely different API. Even at $225, I wouldn't be able to complete any of those features and get a decent salary. As well explained by the Snowdrift reviews, bounties just don't work at all... The ludicrous 10% fee charged by Bountysource made sure I would never do business with them ever again anyways.

Other work There are probably more things I did recently, but I am having difficulty keeping track of the last 5 months of on and off work, so you will forgive that I am not as exhaustive as I usually am.

14 October 2016

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.2)

Introduction NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one. NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs". Release Announcement On Thursday, Oct 13th, version 3.5.99.2 of nx-libs was released [1]. This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by X.org). On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team got cherry-picked to libNX_X11, too. This big chunk of work has been performed by Ulrich Sibiller and there is more to come. We currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the X.org Git site). Another big clean-up performed by Ulrich is the split-up of XKB code which got symlinked between libNX_X11 and nx-X11/programs/Xserver. This brings in some code duplication but allows maintaining the nxagent Xserver code and the libNX_X11 code separately. In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [2] on the ChangeLog file for details. So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!! Change Log A list of recent changes (since 3.5.99.1) can be obtained from here. Known Issues This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation. Binary Builds You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs: Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:
 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -
The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component). Ubuntu developers, please note: we have added nightly builds for Ubuntu latest to our build server. This has been Ubuntu 16.10 so far, but we will soon drop 16.10 support in nightly builds and add 17.04 support. References

2 October 2016

Gregor Herrmann: RC bugs 2016/38-39

the last two weeks have seen the migration of perl 5.24 into testing; most of the bugs I worked on were related to it. additionally a few more build dependencies on tzdata were needed. here's the list:

15 September 2016

Craig Sanders: Frankenwheezy! Keeping wheezy alive on a container host running libc6 2.24

It's Alive! The day before yesterday (at Infoxchange, a non-profit whose mission is "Technology for Social Justice", where I do a few days/week of volunteer systems & dev work), I had to build a docker container based on an ancient wheezy image. It built fine, and I got on with working with it. Yesterday, I tried to get it built on my docker machine here at home so I could keep working on it, but the damn thing just wouldn't build. At first I thought it was something to do with networking, because running curl in the Dockerfile was the point where it was crashing, but it turned out that many programs would segfault, e.g. it couldn't run bash, but sh (dash) was OK. I also tried running a squeeze image, and that had the same problem. A jessie image worked fine (but the important legacy app we need wheezy for doesn't yet run in jessie). After a fair bit of investigation, it turned out that the only significant difference between my workstation at IX and my docker machine at home was that I'd upgraded my home machines to libc6 2.24-2 a few days ago, whereas my IX workstation (also running sid) was still on libc6 2.23. Anyway, the point of all this is that if anyone else needs to run a wheezy container on a docker host running libc6 2.24 (which will be quite common soon enough), you have to upgrade libc6 and related packages (and any -dev packages, including libc6-dev, you might need in your container that are dependent on the specific version of libc6). In my case, I was using docker, but I expect that other container systems will have the same problem and the same solution: install libc6 from jessie into wheezy. Also, I haven't actually tested installing jessie's libc6 on squeeze; if it works, I expect it'll require a lot of extra stuff to be installed too. I built a new frankenwheezy image that had libc6 2.19-18+deb8u4 from jessie. To build it, I had to use a system which hadn't already been upgraded to libc6 2.24. I had already upgraded libc6 on all the machines on my home network. Fortunately, I still had my old VM that I created when I first started experimenting with docker; crazily, it was a VM with two ZFS ZVOLs, a small /dev/vda OS/boot disk, and a larger /dev/vdb mounted as /var/lib/docker. The crazy part is that /dev/vdb was formatted as btrfs (mostly because it seemed a much better choice than aufs). Disk performance wasn't great, but it was OK and it worked. Docker has native support for ZFS, so that's what I'm using on my real hardware. I started with the base wheezy image we're using and created a Dockerfile etc. to update it. First, I added deb lines to the /etc/apt/sources.list for my local jessie and jessie-updates mirror, then I added the following line to /etc/apt/apt.conf:
APT::Default-Release "wheezy";
Without that, any other apt-get installs in the Dockerfile will install from jessie rather than wheezy, which will almost certainly break the legacy app. I forgot to do it the first time, and had to waste another 10 minutes or so building the app's container again. I then installed the following:
apt-get -t jessie install libc6 locales libc6-dev krb5-multidev comerr-dev zlib1g-dev libssl-dev libpq-dev
To minimise the risk of incompatible updates, it's best to install the bare minimum of jessie packages required to get your app running. The only reason I needed to install all of those -dev packages was because we needed libpq-dev, which pulled in all the rest. If your app doesn't need to talk to postgresql, you can skip them. In fact, I probably should try to build it again without them; I added them after the first build failed but before I remembered to set APT::Default-Release (OTOH, it's working OK now and we're probably better off with libssl-dev from jessie anyway). Once it built successfully, I exported the image to a tar file, copied it back to my real Docker machine (coincidentally, the same machine with the docker VM installed), imported it into docker there and tested it to make sure it didn't have the same segfault issues that the original wheezy image did. No problem, it worked perfectly. That worked, so I edited the FROM line in the Dockerfile for our wheezy app to use frankenwheezy and ran make build. It built, passed tests, deployed and is running. Now I can continue working on the feature I'm adding to it, but I expect there'll be a few more yaks to shave before I'm finished. When I finish what I'm currently working on, I'll take a look at what needs to be done to get this app running on jessie. It's on the TODO list at work, but everyone else is too busy, so it's a perfect job for an unpaid volunteer. Wheezy's getting too old to keep using, and this frankenwheezy needs to float away on an iceberg.
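Putting the steps described above together, a rough sketch of what such a Dockerfile might look like follows; the base image name, mirror URL and exact package list are placeholders, not the ones actually used:
# Sketch only: base image name and mirror URL are placeholders.
FROM my-wheezy-base
# Add jessie and jessie-updates sources so libc6 can be pulled from there.
RUN echo "deb http://mirror.example.org/debian jessie main" >> /etc/apt/sources.list && \
    echo "deb http://mirror.example.org/debian jessie-updates main" >> /etc/apt/sources.list
# Keep wheezy as the default release so ordinary installs stay on wheezy.
RUN echo 'APT::Default-Release "wheezy";' >> /etc/apt/apt.conf
# Pull only the bare minimum from jessie.
RUN apt-get update && apt-get -y -t jessie install libc6 locales libc6-dev
The resulting image can then be moved between hosts with docker save and docker load (docker save frankenwheezy > frankenwheezy.tar on the build machine, docker load < frankenwheezy.tar on the target).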

2 September 2016

Julian Andres Klode: apt 1.3 RC4 Tweaking apt update

Did that ever happen to you: You run apt update, it fetches a Release file, then starts fetching DEP-11 metadata, then any pdiff index stuff, and then applies them, all one after another? Or this: You don't see any update progress until very near the end? Worry no more: I tweaked things a bit in 1.3~rc4 (git commit). Prior to 1.3~rc4, acquiring the files for an update worked like this: We create some object for the Release file, and once a release file is done we queue any next object (DEP-11 icons, .diff/Index files, etc). There is no prioritizing, so usually we fetch the 5MB+ DEP-11 icons and components files first, and only then start working on other indices which might use pdiffs. In 1.3~rc4 I changed the queues to be priority queues: Release files and .diff/Index files have the highest priority (once we have them all, we know how much to fetch). The second level of priority goes to the .pdiff files which are later on passed to the rred process to patch an existing Packages, Sources, or Contents file. The third priority level is taken by all other index targets. Actually, I implemented the priority queues back in June. There was just one tiny problem: pipelining. We might be inserting elements into our fetching queues in order of priority, but with pipelining enabled, stuff of lower priority might already have their HTTP request sent before we even get to queue the higher priority stuff. Today I had an epiphany: We fill the pipeline up to a number of items (the depth, currently 10). So, let's just fill the pipeline with items that have the same (or higher) priority as the maximum priority of the already-queued ones, and pretend it is full when we only have lower priority items. And that works fine: First the Release and .diff/Index stuff is fetched, which means we can start showing accurate progress info from there on. Next, the pdiff files are fetched, meaning that we can apply them in parallel to any targets downloading later in parallel (think DEP-11 icon tarballs). This has a great effect on performance: For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update of Debian unstable and testing with Contents and appstream for amd64 and i386, update time reduced from 37 seconds to 24-28 seconds. In other news I recently cleaned up the apt packaging which renamed /usr/share/bug/apt/script to /usr/share/bug/apt. That broke on overlayfs, because dpkg could not rename the old apt directory to a backup name during unpack (only directories purely on the upper layer can be renamed). I reverted that now, so all future updates should be fine. David re-added the Breaks against apt-utils I recently removed by accident during the cleanup, so no more errors about overriding dump solvers. He also added support for fingerprints in gpgv's GOODSIG output, which apparently might come at some point. I also fixed a few CMake issues, fixed the test suite for gpgv 2.1.15, allowed building with a system-wide gtest library (we really ought to add back a pre-built one in Debian), and modified debian/rules to pass -O to make. I wish debhelper would do the latter automatically (there's a bug for that). Finally, we fixed some uninitialized variables in the base256 code, out-of-bound reads in the Sources file parser, off-by-one errors in the tagfile comment stripping code[1], and some memcpy() with length 0. Most of these will be cherry-picked into the 1.2 (xenial) and 1.0.9.8 (jessie) branches (releases 1.2.15 and 1.0.9.8.4). If you forked off your version of apt at another point, you might want to do the same.
[1] those were actually causing the failures and segfaults in the unit tests on hurd-i386 buildds. I always thought it was a hurd-specific issue. PS: Building for Fedora on OBS has a weird socket fd #3 that does not get closed during the test suite despite us setting CLOEXEC on it. Join us in #debian-apt on oftc if you have ideas.
Filed under: Debian, Ubuntu

9 August 2016

Reproducible builds folks: Finishing the final variations for reprotest

Author: ceridwen. I've been working on getting the last of the variations working. With no responses on the mailing list from anyone outside Debian and with limited time remaining, Lunar and I have decided to deemphasize it.
  1. Build path is done.
  2. Host and domain name use the domainname and hostname commands (see the sketch after this list). This site is old, but it indicates that domainname was available on most OSes and hostname was available everywhere as of 2004. Prebuilder uses a Linux-specific utility (unshare --uts) to run this variation in a chroot, but I'm not doing this for reprotest: if you want this variation, use qemu.
  3. User/group will not be portable, because they'll rely on useradd/groupadd and su. useradd and groupadd work on many but not all OSes, notably not including FreeBSD or MacOS X. su was universal in 2004.
  4. Time is not done but will probably be portable to some systems, because it will rely on date -s. Unfortunately, I haven't been able to find any information on how common date -s is across Unix-like OSes, as the -s option is not part of the POSIX standard.
  5. At the moment, I have no idea how to implement changes for /bin/sh and the login shell that will even work across different distributions of Linux, much less different OSes. There are a couple of key problems, starting with the need to find two different shells to use, because there's no way to find out what shells are installed. This blog post explains why /etc/shells doesn't work well for finding what shells are available: not everything in /etc/shells is necessarily a shell (Ubuntu has /usr/bin/screen) and not all available shells are in /etc/shells. Also, there's no good way to find out what shell is the system default because /bin/sh can be an arbitrary binary, not a symlink, and there is no good way to identify what it is if it is a binary. I can hard-code shell choices, but which shells? bash is obvious for Linux, but what's the best second choice?
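As a rough illustration, the kinds of commands these variations boil down to might look like the following. This is a sketch only: the exact options and their availability differ across OSes as discussed above, and the names used here (the builder user/group and $BUILD_COMMAND) are placeholders, not the actual reprotest code.
# Sketch of the variation commands discussed above; not the actual reprotest code.
hostname reprotest-host                 # vary the host name (requires root)
domainname reprotest.example            # vary the NIS/domain name
groupadd builders                       # throwaway group for the second build
useradd -m -g builders builder          # throwaway user (not available on FreeBSD/MacOS X)
su builder -c "$BUILD_COMMAND"          # run the build as that user
date -s '2017-01-01 00:00:00'           # vary the system time (date -s is not POSIX)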
On other topics:
  1. reprotest fails to build properly on jessie: Lunar said, and I agree, that fixing this is not a priority. I need someone with more knowledge of Debian Python packaging to help me. If I'm going to support old versions, I also need some kind of CI server, because I don't have the time or ability to maintain old versions of Debian and Python myself.
  2. libc-bin: ldconfig segfaults when run using "setarch uname26": I don't have a good solution for this, but I don't want to hard-code and maintain an architecture-specific disable. Would changing the argument to setarch work around the segfault? Is there a way to test for the presence of the bug that won't cause a segfault or similar crash?
  3. Please put adt-virt-* binaries back onto PATH: reprotest is not affected by this change because I forked the autopkgtest code rather than depending on it, so that reprotest can be installed through PyPI. At the moment, reprotest doesn't make its versions of the programs in virt/ available on $PATH. This is primarily because of the problems with distributing command-line scripts with setuptools. The approach I'm currently using, including the virt/ programs as non-code data with include reprotest/virt/* in MANIFEST.in, doesn't install them to make them available for other programs. Using one of the other approaches potentially could, but it's difficult to make the imports work with those approaches. (I haven't found a way to do it.) I think the best solution to this problem is to split autopkgtest into the Debian-specific components and the general-purpose virtualization components, but I don't have the time to do this myself or to negotiate with Martin Pitt, if he'd even be receptive. I'm also unsure at this point if it wouldn't be better for reprotest to switch from autopkgtest to using something like Ansible to run the virtualization, because Ansible has solved some of the portability problems already and is not tied to Debian.
My goal is to finish the variations (finally), though as this has always proved more difficult than I expected in the past, I don't make any guarantees. Beyond that, I want to start working on finishing docstrings in the reprotest-specific (i.e., not inherited from autopkgtest) code, improving the documentation in general, and improving the tests.

5 August 2016

Steve Kemp: Using the compiler to help you debug segfaults

Recently somebody reported that my console-based mail-client was segfaulting when opening an IMAP folder, and then when they tried with a local Maildir-hierarchy the same fault was observed. I couldn't reproduce the problem at all, as neither my development host (read "my personal desktop"), nor my mail-host had been crashing at all, both being in use to read my email for several months. Debugging crashes with no backtrace, or real hint of where to start, is a challenge. Even when downloading the same Maildir samples I couldn't see a problem. It was only when I decided to see if I could add some more diagnostics to my code that I came across a solution. My intention was to make it easier to receive a backtrace, by adding more compiler options:
  -fsanitize=address -fno-omit-frame-pointer
I added those options and my mail-client immediately started to segfault on my own machine(s), almost as soon as it started. Ultimately I found three pieces of code where I was allocating C++ objects and passing them to the Lua stack, a pretty fundamental part of the code, which were buggy. Once I'd tracked down the areas of code that were broken and fixed them, the user was happy, and I was happy too. It's interesting that I've been running for over a year with these bogus things in place, which "just happened" to not crash for me or anybody else. In the future I'll be adding these options to more of my C-based projects, as there seems to be virtually no downside. In related news my console editor has now achieved almost everything I want it to, having gained: The only outstanding feature, which is a biggy, is support for Undo which I need to add. Happily no segfaults here, so far..
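For anyone wanting to try the same thing, here is a minimal sketch of how those flags might be passed to a C++ build; the file and binary names are placeholders, not the actual mail-client sources.
 # Sketch only: compile and link with AddressSanitizer enabled.
 g++ -g -fsanitize=address -fno-omit-frame-pointer -o mail-client main.cc
 ./mail-client    # on the first invalid access, ASan aborts and prints a backtrace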

7 June 2016

Enrico Zini: You'll thank me later

I agree with this post by Matthew Garrett. I am quite convinced that most of the communities that I have known are vulnerable to people who are good manipulators of people. Also, in my experience, manipulation by negating, pushing, or reframing the boundaries of people tends not to be recognised as manipulation, let alone abusive behaviour. It's not about physically forcing people to do things that they don't want to do. It's about pushing people, again and again, wearing them out, making them feel like, despite their actual needs and wants, saying "yes" to you is the only viable way out. It can happen for sex, and it can happen for getting a patch merged. It can happen out of habit. It can happen for pretty much anything. Consent culture was not part of my education, and it was something I've had to discover for myself. I assume that to be a common experience, and that pushing against boundaries does happen, even without malicious intentions, on a regular basis. However, it is not ok. Take insisting. It is not the same as persisting. Persisting is what I do when I advocate for change. Persisting is what I do when the first version of my code segfaults. Insisting is what I do when a person says "no" to me and I don't want to accept it. Is it ok to insist that a friend, whom you think is sick, goes and gets help? Is it ok to insist that a friend, whom you think is sexually repressed, pushes through their boundaries to explore their sexuality with you? In both cases, one may say, or think, trust me, you'll thank me afterwards. In both cases, what if afterwards I have nothing to thank you for? I see a common pattern in you'll thank me afterwards situations. It can be in good faith, it can be creepy, it can be abusive, and most of the time, what it is, is dangerously unclear to most of the people involved. I think that in a community like Debian, at the level of personal interaction, Insisting is not ok. I think that in a community like Debian, at the level of personal interaction, "You'll thank me afterwards" is not ok. When I say it's not ok I mean that it should not happen. If it happens, people must be free to say "stop". If it doesn't stop, people must expect to be able to easily find support, understanding, and help to make it stop. Just like when people upload untested packages. Pushing against personal boundaries of people is not ok, and pushing against personal boundaries does happen. When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop. If you cannot find any, or if the only thing you can find is people who say "it never happens here", consider whether you really want to be in that community.

27 May 2016

Patrick Matthäi: Package updates from May

There is some news about my packaging work from May:

9 April 2016

Steve Kemp: Recycling old ideas ..

My previous blog post was about fuzzing and finding segfaults in GNU Awk. At the time of this update they still remain unfixed. Reading about a new release of mutt I've seen a lot of complaints about how it handles HTML mail, by shelling out to lynx or w3m. As I have a vested interest in console based mail-clients I wanted to have a quick check to see how dangerous that could be. After all it wasn't so long ago that I discovered that printing a fingerprint of an SSH key could be dangerous, so the idea of parsing untrusted HTML is something I could see. In fact back in 2005 I reported that some specific HTML could crash Mozilla's firefox. Due to some ordering issues my Firefox bug was eventually reported as a duplicate, and although it seemed to qualify for the Mozilla bug-bounty and a CVE assignment I never received any actual cash. Shame. I'd have been more interested in testing the browser if I had a cheque to hang on my wall (and never cash). Anyway full-circle. Fuzzing the w3m console-based browser resulted in a bunch of segfaults when running this:
 w3m -dump $file.html
Anyway each of the two bugs I reported was fixed in a day or two, and both involved gnarly UTF-8/encoding transformations. Many thanks to Tatsuya Kinoshita for such prompt attention and excellent debugging skills. And lynx? Still no segfaults. I'll leave the fuzzer running over the weekend and if there are no faults found by Monday I guess I'll move on to links.
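For reference, a run like the one above can be driven with a fuzzer such as american fuzzy lop; the following is only a sketch of one possible setup (the directory names, seed file, and use of afl are my assumptions, not necessarily what was used here), and it assumes w3m was rebuilt with afl instrumentation (e.g. CC=afl-gcc):
 mkdir seeds && cp sample.html seeds/                # seed corpus: any small valid HTML file
 afl-fuzz -i seeds -o findings -- ./w3m -dump @@     # @@ is replaced by each mutated test case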

4 April 2016

Raphaël Hertzog: My Free Software Activities in February and March 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me. I skipped my monthly report last time so this one will cover two months. I will try to list only the most important things to not make it too long. The Debian Handbook I worked with Ryuunosuke Ayanokouzi to prepare a paperback version of the Japanese translation of my book. Thanks to the efforts of everybody, it's now available. Unfortunately, Lulu declined to take it into their distribution program, so it won't be available in traditional bookstores (like Amazon, etc.). The reason is that they do not support non-latin character sets in the meta-data. I tried to cheat a little bit by inputting the description in English (still explaining that the book was in Japanese) but they rejected it nevertheless because the English title could mislead people. So the paperback is only available on lulu.com. Fortunately, the shipping costs are reasonable if you pick the most economic offer. Following this I invited the Italian, Spanish and Brazilian Portuguese translators to complete the work (they were close, with all the strings already translated and mainly missing translated screenshots and some back-cover content) so that we can also release paperback versions in those languages. It's getting close to completion for them. Hopefully we will have those available by next month. Distro Tracker In early February, I tweaked the configuration to send (by email) exceptions generated by incoming mails and by routine tasks. Before this they were logged but I did not take the time to look into them. This quickly brought a few issues to light and I fixed them as they appeared: for instance the bounce handling code was getting confused when the character case was not respected, and it appears that some emails come back to us after having been lowercased. Also the code was broken when the References field used more than one line on incoming control emails. This brought to light a whole class of problems with the database storing the same email twice with only differing case. So I did further work to merge all those duplicate entries behind a single email entry. Later, the experimental Sources files changed and I had to tweak the code to work with the removal of the Files field (relying instead on Checksums-* to find out the various files that are part of the entry). At some point, I also fixed the login form to not generate an exception when the user submits an empty form. I also decided that I no longer wanted to support Django 1.7 in distro tracker as Django 1.8 is the current LTS version. I asked the Debian system administrators to update the package on tracker.debian.org with the version in jessie-backports. This allowed me to fix a few deprecation warnings that I kept triggering because I wanted the code to work with Django 1.7. One of those warnings was generated by django-jsonfield though and I could not fix it immediately. Instead I prepared a pull request that I submitted to the upstream author. Oh, and one last thing: I tweaked the CSS to densify the layout on the package page. This was one of the most requested changes from the people who were still preferring packages.qa.debian.org over tracker.debian.org.
Kali and new pkg-security team As part of my Kali work, I have been fixing RC bugs in Debian packages that we use in Kali. But in many cases, I stumbled upon packages whose maintainers were really missing in action (MIA). Up to now, we were only doing non-maintainer uploads (NMU), but I want to be able to maintain those packages more effectively, so we created a new pkg-security team (we're only two right now and we have no documentation yet, but if you want to join, you're welcome, in particular if you maintain a package which is useful in the security field). arm64 work. The first 3 packages that we took over (ssldump, sucrack, xprobe) are actually packages that were missing arm64 builds. We just started our arm64 port on Kali and we fixed them for that architecture. Since they were no longer properly maintained, in most cases it was just a matter of using dh_autoreconf to get up-to-date config.sub/config.guess files. We still miss a few packages on arm64: vboot-utils (that we will likely take over soon since it's offered for adoption), ruby-libv8 and ruby-therubyracer, ntopng (we have to wait for a new luajit which is only in experimental right now). We also noticed that dh-make-golang was not available on arm64; after some discussion on #debian-buildd, I filed two bugs for this: #819472 on dh-make-golang and #819473 on dh-golang. RC bug fixing. hdparm was affected by multiple RC bugs and the release managers were trying to get it removed from testing. This removed multiple packages that were used by Kali and its users. So I investigated the situation of that package, convinced the current maintainers to orphan it, asked for new maintainers on debian-devel, reviewed multiple updates prepared by the new volunteers and sponsored their work. Now hdparm is again RC-bug free and has the latest upstream version. We also updated jsonpickle to 0.9.3-1 to fix RC bug #812114 (that I forwarded upstream first). Systemd presets support in init-system-helpers. I tried to find someone (to hire) to implement the systemd preset feature I requested in #772555 but I failed. Still, Andreas Henriksson was kind enough to give it a try and sent a first patch. I tried it and found some issues, so I continued to improve and simplify it. I submitted an updated patch and pinged Martin Pitt. He pointed me to the DEP-8 test failures that my patch was creating. I quickly fixed those afterwards. This patch is in use in Kali and lets us disable network services by default. I would like to see it merged in Debian so that everybody can set up systemd preset files and have their desires respected at installation time. Misc bug reports. I filed #813801 to request a new upstream release of kismet. Same for masscan in #816644 and for wkhtmltopdf in #816714. We packaged (before Debian) a new upstream release of ruby-msgpack and found out that it was not building on armel/armhf, so we filed two upstream tickets (with a suggested fix). In #814805, we asked the pyscard maintainer to reinstate python-pyscard that was dropped (keeping only the Python 3 version) as we use the Python 2 version in Kali. And there's more: I filed #816553 (segfault) and #816554 against cdebootstrap. I asked for dh-python to have a better behaviour after having been bitten by the fact that dh with python3 was not doing what I expected it to do (see #818175). And I reported #818907 against live-build since it is failing to handle a package whose name contains an upper case character (it's not policy compliant but dpkg supports them).
Misc packaging I uploaded Django 1.9.2 to unstable and 1.8.9 to jessie-backports. I provided the supplementary information that Julien Cristau asked me for in #807654, but despite this, this jessie update has been ignored for the second point release in a row. It is now outdated until I update it to include the security fixes that have been released in the meantime, but I'm not yet sure that I will do it: the lack of cooperation of the release team for that kind of request is discouraging. I sponsored multiple uploads of dolibarr (notably one security update) and tcpdf (to fix one RC bug). Thanks! See you next month for a new summary of my activities.


29 February 2016

Antonio Terceiro: Debian Ruby Sprint 2016 - day 1

This year's Debian Ruby team sprint started today here in Curitiba. Everyone arrived fine, and we started working at the meeting room we have booked for the week at the Curitiba campus of the Federal Technical University of Paraná. The room is at the Department of Business and Community Relations, which makes a lot of sense! :-) The day started with a quick setup, with a simple 8-port switch and a couple of power strips. It took us a few minutes to figure out what was blocked or not on the corporate network, and almost everyone who needs connections that are usually blocked in such environments already had their VPN setups, so we were able to get started right after that. We are taking notes live on Mozilla's public etherpad site. Today we accomplished quite a lot:

Steve Kemp: If line-noise is a program, all fuzzers are developers

Recently I had a conversation with a programmer who repeated the adage that programming in perl consists of writing line-noise. This isn't true, but it reminded me of my love of fuzzers. Fuzzers are often used to generate random input files which are fed to tools, looking for security problems, segfaults, and similar hilarity. To the untrained eye the output of most fuzzers is essentially line-noise, since you often start with a valid input file and start flipping bits, swapping bytes, and appending garbage. Anyway this made me wonder what would happen if you fed random garbage into a perl interpreter. I wasn't brave enough to try it, because knowing my luck the fuzzer would write a program like so:
system( "rm -rf /home/steve" );
But I figured it was still an interesting idea, and I could have a go at fuzzing something else. I picked gawk, the GNU implementation of awk because the codebase is pretty small, and I understand it reasonably well. Almost immediately my fuzzer found some interesting segfaults and problems. Here's a nice simple example:
 $ gawk 'for (i = ) in steve kemp rocks'
 ..
 gawk: cmd. line:1: fatal error: internal error: segfault
 Aborted
I look forward to seeing what happens when other people fuzz perl..

12 January 2016

Antoine Beaupré: The Downloadable Internet

How corporations killed the web I have read with fascination what we would once have called a blog post, except it was featured on The Guardian: Iran's blogfather: Facebook, Instagram and Twitter are killing the web The "blogfather" is Hossein Derakshan or h0d3r, an author from Tehran who was jailed for almost a decade for his blogging. The article is very interesting both because it shows how fast things changed in the last few years, technology-wise, but more importantly, how content-free the web has become, where Facebook's last acquisition, Instagram, is not even censored by Iran. Those platforms have stopped being censored, not because of democratic progress but because they have become totally inoffensive (in the case of Iran) or become a tool of surveillance for the government and targeted advertisement for companies (in the case of, well, most of the world). This struck a chord, personally, at the political level: we are losing control of the internet (if we ever had it). The defeat isn't directly political: we have some institutions like ICANN and the IETF that we can still have an effect on, even if only at the technological level. The defeat is economic, and, of course, through economy comes enormous power. That defeat meant that we have first lost free and open access to the internet (yes, dialup used to be free) and then free hosting of our content (no, Google and Facebook are not free, you are the product). This marked a major change in the way content is treated online. H0d3r explains this as the shift from a link-based internet to a stream-based internet, a "departure from a books-internet towards a television-internet". I have been warning about this "television-internet" in my talks and conversations for a while and with Netflix taking the crown off Youtube (and making you pay for it, of course), we can assuredly say that H0d3r is right and the television, far from disappearing, is finally being resurrected and taking over the internet.

The Downloadable internet and open standards But I would like to add to that: it is not merely that we had "links" before. We had, and still have, open standards. This made the internet "downloadable" (and by extension, uploadable) and decentralized. (In fact, I still remember my earlier days on the web when I would actually download images (as in "right-click" and "Save as..." images, not just have the browser download and display it on the fly). I would download images because they were big! It could take a minute or sometimes more to download images on older modems. Later, I would do the same with music: I would download WAV files before the rise of the MP3 format, of which I ended up building a significant collection (just fair use copies from friends and owned CDs, of course) and eventually video files.) The downloadable internet is what still allows me to type this article in a text editor, without internet access, while reading H0d3r's blog post on my e-reader, because I downloaded his article off an RSS feed. It is what makes it possible for anyone to download a full copy of this blog post and connected web pages as a git repository and this way get the full history of modifications on all the pages, but also be able to edit it offline and push modifications back in. Wikipedia is downloadable (there are even offline apps for your phone). Open standards like RSS feeds and HTML are downloadable. Heck, even the Internet Archive is downloadable (and I mean, all of it, not just the parts you want), surprisingly enough.

The app-based internet and proprietary software App-based websites like Google Plus and Facebook are not really downloadable. They are meant to be browsed through an app, so what you actually see through your web browser is really more an application, downloaded software, than a downloaded piece of content. If you turn off Javascript, you will see that visiting Facebook actually shows no content: everything is downloaded on the fly by an application itself downloaded, on the fly, by your browser. In a way, your browser has become an operating system that runs proprietary, untrusted and unverified applications from the web. (The software is generally completely proprietary, except some frameworks that are published as free software in what looks like the lenient act of a godly king, but is actually more an economic decision of a clever corporation which outsources, for free, R&D and testing to the larger free software community. The real "secret sauce" is basically always proprietary, if only so that we don't freak out on stuff like PRISM that reports everything we do to the government.) Technology is political. This new "app design" is not a simple optimization or a cosmetic accident of a fancy engineer: by moving content through an application, Facebook, Twitter and the like can see exactly what you do on a web page, what you actually read (as opposed to what you click on) and for how long. By adding a proprietary interface between you and the content online, the advertisement-surveillance complex can track every move you make online. This is a very fine-tuned surveillance system, and because of the App, you cannot escape it. You cannot share the content outside of Facebook, as you can't download it. Or at least, it's not obvious how you can. Projects like youtube-dl are doing an amazing job reverse-engineering what is becoming the proprietary Youtube streaming protocol, which is constantly changing and is not really documented. But it's a hack: it's a Sisyphean struggle which is bound to fail, and it does, all the time, until we figure out how to either turn those corporations into good netizens respecting and contributing to open standards (unlikely) or destroy those corporations (most likely). You are trapped in their walled garden. No wonder internet.org is Facebook only: for most people nowadays, the internet is the web, and the web is Facebook, Twitter and Google, or an iPad with a bunch of apps, each their own cute little walled garden, crafted just for you. If you think you like the Internet, you should really reconsider what you are watching, what you are consuming, or rather, how it is consuming you. There are alternatives. Facebook is a tough nut to crack for free software activists because we lack the critical mass. But Facebook is also an addiction for a lot of people, and spending less time on that spying machine could be a great improvement for you, I am sure. For everything else, we have good free software alternatives and open standards, use them. "Big brother ain't watching you, you're watching him." - CRASS, Nineteen Eighty Bore (audio)

15 December 2015

Bartosz Feński: Once again two full-time days to work on Debian

Thanks to my employer I had the opportunity to spend two days just working on my packages.
I know it's kinda sad that I have to wait for these two special days to do my (volunteer, but still) job. Anyway, I was able to dig through all changes in our policy and standards and update some of my packages. Changes include:
potrace (1.13-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version.
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Thu, 10 Dec 2015 10:37:54 +0100
ipcalc (0.41-5) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * Updating to the newest standards of basically everything. Very cool experience.
  * CGI script is now optional and put into examples (Closes: #763032)
 -- Bartosz Fenski  Thu, 10 Dec 2015 13:51:12 +0100
dibbler (1.0.1-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream release (Closes: #780232, #795493)
    fixes segfaults in TClntCfgMgr::validateConfig (Closes: #732697)
  * Includes debugging packages (Closes: #732707)
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Thu, 10 Dec 2015 13:33:56 +0100
calcurse (4.0.0-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version.
  * Properly handles apts file with new line sign (Closes: #749282)
  * Explicitly uses autotools-dev dependency.
  * Bumped Standards-Version. (no changes needed)
 -- Bartosz Fenski  Mon, 14 Dec 2015 10:55:30 +0100
libstatgrab (0.91-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * New upstream version (Closes: #804480)
  * ACK NMUs, thanks Manuel!
  * Bumped Standards-Version (no changes needed)
 -- Bartosz Fenski  Mon, 14 Dec 2015 14:27:37 +0100
httpie (0.9.2-1) unstable; urgency=low
  * The Akamai Technologies paid volunteer days release.
  * ACK previous NMU, thanks a lot Vincent!
  * Bumped required python-requests version (Closes: #802540)
 -- Bartosz Fenski  Mon, 14 Dec 2015 15:39:37 +0100
These uploads fixed 7 bugs and 9 lintian warnings/errors. Apart from that, I reviewed the mydumper package update and uploaded it for Mateusz Kijowski, who is going through the NM process. Thanks Akamai!

1 November 2015

Steinar H. Gunderson: YUV color primaries

Attention: If these two videos don't both look identical (save for rounding errors) to each other and to this slide, then your player has a broken understanding of YUV color primaries, and will render lots of perfectly normal video subtly off in color, one way or the other. Remuxed in MP4 instead of MPEG-TS here, for easier testing in browsers etc.: First, second. Chrome passes with perfect marks, Iceweasel segfaults on both (GStreamer's quality or lack thereof continues to amaze me). MPlayer and VLC both get one of them wrong (although VLC gets it more right if you use its screenshot function to save a PNG to disk, so check what's actually on the screen); ffmpeg with PNG output gets it right but ffplay doesn't. Edit to add: The point is the stable picture, not the flickering in the first few frames, of course. The video was encoded quite hastily.
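For reference, the kind of remux mentioned above (MPEG-TS to MP4 without re-encoding) can be done with a simple stream copy; a sketch with placeholder file names:
 ffmpeg -i input.ts -c copy output.mp4   # add -bsf:a aac_adtstoasc if the TS carries ADTS AAC audio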

26 October 2015

Simon Josefsson: Combining Dnsmasq and Unbound

For my home office network I have been using Dnsmasq for some time. Dnsmasq provides me with DNS, DHCP, DHCPv6, and IPv6 Router Advertisement. I run dnsmasq on a Debian Jessie server, but it works similarly on OpenWRT if you want to use a smaller device. My entire /etc/dnsmasq.d/local configuration used to look like this:
dhcp-authoritative
interface=eth1
read-ethers
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-range=2001:9b0:104:42::100,2001:9b0:104:42::1500
dhcp-option=option6:dns-server,[::]
enable-ra
Here dhcp-authoritative enables DHCP. interface=eth1 says to listen on eth1 only, which is my internal (IPv4 NAT) network. I try to keep track of the MAC addresses of all my devices in a /etc/ethers file, so I use read-ethers to have dnsmasq give stable IP addresses for them. The dhcp-range is used to enable DHCP and DHCPv6 on my internal network. The dhcp-option=option6:dns-server,[::] statement is needed to inform the DHCP clients of the DNS resolver's IPv6 address, otherwise they would only get the IPv4 DNS server address. The enable-ra parameter enables IPv6 router advertisement on the internal network, thereby removing the need to run radvd too, which is useful since I prefer to use copyleft software. Recently I had a desire to use DNSSEC, and enabled it in Dnsmasq using the following statements:
dnssec
trust-anchor=.,19036,8,2,49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
dnssec-check-unsigned
The dnssec keyword enables DNSSEC validation in dnsmasq, using the indicated trust-anchor (get the root-anchors from IANA). The dnssec-check-unsigned deserves some more discussion. The dnsmasq manpage describes it as follows:
As a default, dnsmasq does not check that unsigned DNS replies are legitimate: they are assumed to be valid and passed on (without the authentic data bit set, of course). This does not protect against an attacker forging unsigned replies for signed DNS zones, but it is fast. If this flag is set, dnsmasq will check the zones of unsigned replies, to ensure that unsigned replies are allowed in those zones. The cost of this is more upstream queries and slower performance.
For example, this means that dnsmasq's DNSSEC functionality is not secure against active man-in-the-middle attacks between dnsmasq and the DNS server it is using. Even if example.org used DNSSEC properly, an attacker could fake to dnsmasq that it was unsigned, and I would get potentially incorrect values in return. We all know that the Internet is not a secure place, and your threat model should include active attackers. I believe this mode should be the default in dnsmasq, and users should have to configure dnsmasq to not be in that mode if they really want to (with the obvious security warning). Running with this enabled for a couple of days resulted in frustration about not being able to reach a couple of domains. The behaviour was that my clients would hang indefinitely or get a SERVFAIL, both resulting in lack of service. You can enable query logging in dnsmasq with log-queries, and enabling this I noticed three distinct forms of error patterns:
jow13gw dnsmasq 460 - -  forwarded www.fritidsresor.se to 213.80.101.3
jow13gw dnsmasq 460 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply cloudflare-dnssec.net is BOGUS DNSKEY
jow13gw dnsmasq 547 - -  validation result is BOGUS
jow13gw dnsmasq 547 - -  reply linux.conf.au is BOGUS DS
jow13gw dnsmasq 547 - -  validation result is BOGUS
The first only happened intermittently, the second did not cause any noticeable problem, and the final one was reproducible. To be fair, I only found the last example after starting to search for problem reports (see post confirming bug). At this point, I had a confirmed bug in dnsmasq that affects my use-case. I want to use official packages from Debian on this machine, so installing newer versions manually is not an option. So I started to look into alternatives for DNS resolving, and quickly found Unbound. Installing it was easy:
apt-get install unbound
unbound-control-setup 
I created a local configuration file in /etc/unbound/unbound.conf.d/local.conf as follows:
server:
	interface: 127.0.0.1
	interface: ::1
	interface: 192.168.1.2
	interface: 2001:9b0:104:42::2
	access-control: 127.0.0.1 allow
	access-control: ::1 allow
	access-control: 192.168.1.2/24 allow
	access-control: 2001:9b0:104:42::2/64 allow
	outgoing-interface: 155.4.17.2
	outgoing-interface: 2001:9b0:1:1a04::2
#	log-queries: yes
#	verbosity: 2
The interface keyword determines which IP addresses to listen on; here I used the loopback interface and the local address of the physical network interface for my internal network. The access-control allows recursive DNS resolving from those networks. And outgoing-interface specifies my external Internet-connected interface. log-queries and/or verbosity are useful for debugging. To make things work, dnsmasq has to stop providing DNS services. This can be achieved with the port=0 keyword, however that will also disable informing DHCP clients about the DNS server to use. So this has to be added in manually. I ended up adding the two following lines to /etc/dnsmasq.d/local:
port=0
dhcp-option=option:dns-server,192.168.1.2
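Before restarting anything, Unbound can check its own configuration; unbound-checkconf ships with the unbound package (the systemctl lines assume a systemd-based system, use service ... restart otherwise):
unbound-checkconf /etc/unbound/unbound.conf
systemctl restart unbound
systemctl restart dnsmasq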
Restarting unbound and dnsmasq now leads to working (and secure) internal DNSSEC-aware name resolution over both IPv4 and IPv6. I can verify that resolution works, and that Unbound verifies signatures and rejects bad domains properly, with host (or dig) as below, or use an online DNSSEC resolver test page, although I'm not sure how confident you can be in the result from that page.
$ host linux.conf.au
linux.conf.au has address 192.55.98.190
linux.conf.au mail is handled by 1 linux.org.au.
$ host sigfail.verteiltesysteme.net
;; connection timed out; no servers could be reached
$ 
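The host output above only shows the end result. To see the validation status itself you can ask dig for the DNSSEC data, querying the Unbound listener configured above (a sketch): a validated answer carries the ad flag in its header, and the deliberately broken sigfail domain should fail validation rather than resolve.
$ dig +dnssec linux.conf.au @192.168.1.2
# a validated reply shows "ad" among the header flags
$ dig +dnssec sigfail.verteiltesysteme.net @192.168.1.2
# should fail validation (typically status: SERVFAIL) instead of returning an address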
I use Munin to monitor my services, and I was happy to find a nice Unbound Munin plugin. I installed the file in /usr/share/munin/plugins/ and created a Munin plugin configuration file /etc/munin/plugin-conf.d/unbound as follows:
[unbound*]
user root
env.statefile /var/lib/munin-node/plugin-state/root/unbound.state
env.unbound_conf /etc/unbound/unbound.conf
env.unbound_control /usr/sbin/unbound-control
env.spoof_warn 1000
env.spoof_crit 100000
I ran munin-node-configure --shell | sh to enable it. For the plugin to work, Unbound has to be configured as well, so I created /etc/unbound/unbound.conf.d/munin.conf as follows.
server:
	extended-statistics: yes
	statistics-cumulative: no
	statistics-interval: 0
remote-control:
	control-enable: yes
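After restarting Unbound with control-enable: yes in place (and with the certificates generated earlier by unbound-control-setup), you can check that the statistics interface actually answers before pointing Munin at it; the counter names below are examples of what Unbound typically reports, shown only as an illustration:
$ sudo unbound-control stats_noreset | head
# expect lines such as total.num.queries=... and total.num.cachehits=...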
The graphs may be viewed at my munin instance.

12 October 2015

Steve Kemp: So about that idea of using ssh-keygen on untrusted input?

My previous blog post related to using ssh-keygen to generate fingerprints from SSH public keys. At the back of my mind was the fear that running the command against untrusted, user-supplied, keys might be a bad plan. So I figured I'd do some fuzzing to reassure myself. The most excellent LWN recently published a piece on Fuzzing with american fuzzy lop, so with that to guide me I generated a pair of SSH public keys, and set to work. Two days later I found an SSH public key that would make ssh-keygen segfault, and equally the SSH client (same parser), so that was a shock. The good news is that my Perl module to fingerprint keys is used like so:
my $helper = SSHKey::Fingerprint->new( key => "ssh ...." );
if ( $helper->valid() ) {
   my $fingerprint = $helper->fingerprint();
   ...
}
The validity test catches my bogus key, so in my personal use-cases this is OK. That said, it's a surprise to see this:
skx@shelob ~ $ ssh -i key.trigger.pub steve@ssh.steve.org.uk 
Segmentation fault
Similarly running "ssh-keygen -l -f ~/key.trigger.pub" results in an identical segfault. In practice this is a low-risk issue, hence mentioning it and filing the bug report publicly, even if code execution is possible. Because in practice, how many times do people fingerprint keys from unknown sources? Except for things like GitHub's key management page? Some people probably do it, but I assume they do it infrequently and only after some minimal checking. Anyway, we'll say this is my first security issue of 2015, we'll call it #roadhouse, and we'll get right on trademarking the term, designing the logo, and selling out for all the filthy filthy lucre ;)
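For anyone curious, the fuzzing run described above works out to something like the following; a sketch only, since the build step, seed corpus and exact ssh-keygen invocation are my assumptions about a typical american fuzzy lop setup, not the author's actual commands:
# Instrumented build of OpenSSH, inside an unpacked source tree (assumed layout).
CC=afl-gcc ./configure && make ssh-keygen
# Seed corpus: one valid public key.
mkdir in && ssh-keygen -t rsa -N '' -f seed && mv seed.pub in/
# Fuzz the fingerprinting code path; afl-fuzz substitutes @@ with each mutated input file.
afl-fuzz -i in -o findings -- ./ssh-keygen -l -f @@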

4 October 2015

Jonathan Carter: Long Overdue Debconf 15 Post

Debconf 15
In August (that was 2 months ago, really!?) I attended DebCamp and DebConf in Heidelberg, Germany. This blog post is somewhat belated due to the debbug (flu obtained during DebConf) and all the catching up I had to do since then.
Debcamp
Debcamp was great: I got to hack on some of my Python-related packages that had been in need of love for a long time, and also got to spend a lot of time tinkering with VLC for the Video Team. Even better than that, I caught up with a lot of great people I haven't seen in ages (and met new ones) and stayed up waaaaay too late drinking beer, playing Mao and watching meteor showers.
Debconf
At Debconf, I gave a short talk about AIMS Desktop (slides) but also expanded on the licensing problems we've had with Ubuntu on that project. Not all was bleak on the Ubuntu front though: some Ubuntu/Canonical folk were present at DebConf and said that they'd gladly get involved with porting Ubiquity (the Ubuntu installer, a front-end to d-i) to Debian. That would certainly be useful to many derivatives, including potentially AIMS Desktop if it were to move over to Debian. AIMS Desktop talk slides: We're hosting DebConf in Cape Town next year and did an introduction during a plenary (slides). It was interesting spending some time with the DC15 team and learning how they work; it's amazing all the detail they have to care about and how easy they made it look from the outside. I hope the DC16 team will pull that off as well. Debconf 16 slides: DC16 at DC15 talk. DebConf 16 team members present at DebConf 15 during the DC16 presentation: I uploaded my photos to DebConf Gallery, Facebook and Google, take your pick ;-). Many sessions were recorded, catch them on video.debian.net. If I had to summarize everything that I found interesting I'd have to delay posting this entry even further; topics that were particularly interesting were:
Pollito's First Trip to Africa
In my state of flu, with a complete lack of concentration for anything work related, I went ahead and made a little short story documenting Pollito's (the DebConf mascot chicken) first trip to Africa. It's silly but it was fun to make and some people enjoyed it ^_^ Well, what else can I say? DebConf 15 was a blast! Hope to see you at Debconf 16!

30 September 2015

Chris Lamb: Free software activities in September 2015

Inspired by Raphaël Hertzog, here is a monthly update covering a large part of what I have been doing in the free software world:
Debian
The Reproducible Builds project was also covered in depth on LWN as well as in Lunar's weekly reports (#18, #19, #20, #21, #22).
Uploads
  • redis: A new upstream release, as well as overhauling the systemd configuration, maintaining feature parity with sysvinit and adding various security hardening features.
  • python-redis: Attempting to get its Debian Continuous Integration tests to pass successfully.
  • libfiu: Ensuring we do not FTBFS under exotic locales.
  • gunicorn: Dropping a dependency on python-tox now that tests are disabled.



RC bugs


I also filed FTBFS bugs against actdiag, actdiag, bangarang, bmon, bppphyview, cervisia, choqok, cinnamon-control-center, clasp, composer, cpl-plugin-naco, dirspec, django-countries, dmapi, dolphin-plugins, dulwich, elki, eqonomize, eztrace, fontmatrix, freedink, galera-3, golang-git2go, golang-github-golang-leveldb, gopher, gst-plugins-bad0.10, jbofihe, k3b, kalgebra, kbibtex, kde-baseapps, kde-dev-utils, kdesdk-kioslaves, kdesvn, kdevelop-php-docs, kdewebdev, kftpgrabber, kile, kmess, kmix, kmldonkey, knights, konsole4, kpartsplugin, kplayer, kraft, krecipes, krusader, ktp-auth-handler, ktp-common-internals, ktp-text-ui, libdevice-cdio-perl, libdr-tarantool-perl, libevent-rpc-perl, libmime-util-java, libmoosex-app-cmd-perl, libmoosex-app-cmd-perl, librdkafka, libxml-easyobj-perl, maven-dependency-plugin, mmtk, murano-dashboard, node-expat, node-iconv, node-raw-body, node-srs, node-websocket, ocaml-estring, ocaml-estring, oce, odb, oslo-config, oslo.messaging, ovirt-guest-agent, packagesearch, php-svn, php5-midgard2, phpunit-story, pike8.0, plasma-widget-adjustableclock, plowshare4, procps, pygpgme, pylibmc, pyroma, python-admesh, python-bleach, python-dmidecode, python-libdiscid, python-mne, python-mne, python-nmap, python-nmap, python-oslo.middleware, python-riemann-client, python-traceback2, qdjango, qsapecng, ruby-em-synchrony, ruby-ffi-rzmq, ruby-nokogiri, ruby-opengraph-parser, ruby-thread-safe, shortuuid, skrooge, smb4k, snp-sites, soprano, stopmotion, subtitlecomposer, svgpart, thin-provisioning-tools, umbrello, validator.js, vdr-plugin-prefermenu, vdr-plugin-vnsiserver, vdr-plugin-weather, webkitkde, xbmc-pvr-addons, xfsdump & zanshin.
